40 research outputs found

    Enriching mobile interaction with garment-based wearable computing devices

    Wearable computing is on the brink of moving from research to the mainstream. The first simple products, such as fitness wristbands and smartwatches, have hit the mass market and achieved considerable market penetration. However, the number and versatility of research prototypes in the field of wearable computing far exceed the devices available on the market. Smart garments in particular, as a specific type of wearable computer, have high potential to change the way we interact with computing systems. Due to their proximity to the user's body, smart garments allow implicit and explicit user input to be sensed unobtrusively. They are capable of sensing physiological information, detecting touch input, and recognizing the movement of the user. In this thesis, we explore how smart garments can enrich mobile interaction. Employing a user-centered design process, we demonstrate how different input and output modalities can enrich the interaction capabilities of mobile devices such as mobile phones or smartwatches. To understand the context of use, we chart the design space for mobile interaction through wearable devices, focusing on device placement on the body as well as interaction modality. We use a probe-based research approach to systematically investigate the possible inputs and outputs for garment-based wearable computing devices. We develop six research probes showing how mobile interaction benefits from wearable computing devices and what requirements these devices pose for mobile operating systems. On the input side, we look at explicit input using touch and mid-air gestures as well as implicit input using physiological signals. Although touch input is well known from mobile devices, the limited screen real estate and the occlusion of the display by the input finger are challenges that can be overcome with touch-enabled garments. Additionally, mid-air gestures provide a more sophisticated and abstract form of input. We present a gesture elicitation study addressing the special requirements of mobile interaction, along with the resulting gesture set. As garments are worn, they allow different physiological signals to be sensed. We explore how these physiological signals can be leveraged for implicit input. We conduct a study assessing physiological information by focusing on the workload of drivers in an automotive setting, and show that the driver's workload can be inferred from these physiological signals. Besides the input capabilities of garments, we explore how garments can be used for output. We present research probes covering the most important output modalities, namely visual, auditory, and haptic. We explore how low-resolution displays can serve as context displays and how and where content should be placed on such a display. For auditory output, we investigate a novel authentication mechanism utilizing the closeness of wearable devices to the body: we show that by probing audio cues through the head of the user and re-recording them, user authentication is feasible. Lastly, we investigate electrical muscle stimulation (EMS) as a haptic feedback method and show that by actuating the user's body, an embodied form of haptic feedback can be achieved. From these research probes, we distilled a set of design recommendations, grouped into interaction-based and technology-based recommendations, which serve as a basis for designing novel forms of mobile interaction. We implement a system based on these recommendations; it supports developers in integrating wearable sensors and actuators by providing an easy-to-use API for accessing these devices. In conclusion, this thesis broadens the understanding of how garment-based wearable computing devices can enrich mobile interaction, outlining challenges and opportunities on both the interaction and the technological level. The unique characteristics of smart garments make them a promising technology for taking the next step in mobile interaction.
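
    The head-probed audio authentication mentioned above can be read as a template-matching problem. The abstract does not specify the matching algorithm, so the following sketch assumes normalized cross-correlation against an enrolled template and an illustrative acceptance threshold:

```python
import numpy as np

def similarity(recorded: np.ndarray, template: np.ndarray) -> float:
    """Peak normalized cross-correlation between a re-recorded audio cue
    and the user's enrolled template (both mono, same sample rate)."""
    r = (recorded - recorded.mean()) / (recorded.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    corr = np.correlate(r, t, mode="full") / len(t)
    return float(corr.max())

def authenticate(recorded: np.ndarray, template: np.ndarray,
                 threshold: float = 0.6) -> bool:
    # Accept only if the recording resembles the enrolled body response.
    # The 0.6 threshold is illustrative; it would be tuned on real data.
    return similarity(recorded, template) >= threshold
```

    In practice the template would capture how the user's head filters the probe signal, so a recording taken from a different person (or replayed audio) should score below the threshold.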

    Brainatwork: Logging Cognitive Engagement and Tasks in the Workplace Using Electroencephalography

    Today's workplaces are dynamic and complex. Digital data sources such as email and video conferencing aim to support workers but also add to their multitasking burden. Psychophysiological sensors such as electroencephalography (EEG) can provide users with cues about their cognitive state. We introduce BrainAtWork, a workplace engagement and task logger that shows users their cognitive state while working on different tasks. In a lab study with eleven participants working on their own real-world tasks, we gathered 16 hours of EEG and PC logs, which were labeled into three classes: central, peripheral, and meta work. We evaluated the usability of BrainAtWork via questionnaires and interviews, and investigated the correlations between cognitive engagement measured from EEG and subjective responses from experience sampling probes. Using random forest classification, we show the feasibility of automatically labeling work tasks into these work classes. We discuss how BrainAtWork can support workers in the long term by encouraging reflection and helping with task scheduling.
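
    As a rough illustration of such a pipeline, EEG windows are commonly reduced to band-power features before classification. The bands, sampling rate, and feature choice below are assumptions for illustration, not details from the paper; the resulting feature vectors would then feed a random-forest classifier:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of an EEG window in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

def features(window: np.ndarray, fs: float = 256.0) -> list:
    # Classic EEG bands (theta, alpha, beta); one feature vector per
    # window, later classified as central, peripheral, or meta work.
    bands = [(4.0, 8.0), (8.0, 13.0), (13.0, 30.0)]
    return [band_power(window, fs, lo, hi) for lo, hi in bands]
```

    A window dominated by alpha activity (e.g., a 10 Hz oscillation) yields its largest feature in the second band, which is the kind of structure a random forest can exploit.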

    Investigating User Needs for Bio-sensing and Affective Wearables

    Bio-sensing wearables are advancing rapidly, promising to provide users with rich information about their physiological and affective states. However, relatively little is known about users' interest in acquiring, sharing, and receiving this information, and through which channels and modalities. To close this gap, we report on the results of an online survey (N=109) exploring principal aspects of the design space of wearables, such as data types, contexts, feedback modalities, and sharing behaviors. The results show that users are interested in obtaining physiological, emotional, and cognitive data through modalities beyond traditional touchscreen output. The valence of the information, whether positive or negative, affects sharing behavior.

    Memorability of cued-recall graphical passwords with saliency masks

    Cued-recall graphical passwords have high potential for secure user authentication, particularly if combined with saliency masks to prevent users from selecting weak passwords. Saliency masks were shown to significantly improve password security by excluding those areas of the image that are most likely to lead to hotspots. In this paper, we investigate the impact of such saliency masks on the memorability of cued-recall graphical passwords. We first conduct two pre-studies (N=52) to obtain a set of images at three levels of complexity as well as real passwords. A month-long user study (N=26) revealed a strong learning effect for graphical passwords, in particular those defined on images with a saliency mask. While the learning curve for complex images is steeper than for less complex ones, complex images best supported memorability in the long term, most likely because they provided users with more alternatives for selecting memorable password points. These results complement prior work on the security of such passwords and underline the potential of saliency masks as both a secure and usable improvement to cued-recall gaze-based graphical passwords.
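
    A saliency mask of this kind can be sketched as a simple threshold over a precomputed saliency map: the most salient pixels are blocked so users cannot place password points on likely hotspots. The exclusion fraction below is illustrative, not a value from the paper:

```python
import numpy as np

def saliency_mask(saliency: np.ndarray, exclude_fraction: float = 0.3) -> np.ndarray:
    """Boolean mask of selectable regions: True where a password point
    may be placed. The most salient `exclude_fraction` of pixels --
    the likely hotspots -- are excluded."""
    threshold = np.quantile(saliency, 1.0 - exclude_fraction)
    return saliency < threshold

def is_valid_point(mask: np.ndarray, x: int, y: int) -> bool:
    # Reject password points that fall on masked (hotspot) pixels.
    return bool(mask[y, x])
```

    During password creation, the interface would simply refuse clicks where `is_valid_point` is False, steering users toward less predictable regions.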

    How to Communicate Robot Motion Intent: A Scoping Review

    Robots are becoming increasingly present in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.
    Comment: Interactive Data Visualization of the Paper Corpus: https://rmi.robot-research.d

    HaptiX: Vibrotactile Haptic Feedback for Communication of 3D Directional Cues

    In Human-Computer Interaction, vibrotactile haptic feedback offers the advantage of being independent of any visual perception of the environment. Most importantly, the user's field of view is not obscured by user interface elements, and the visual sense is not unnecessarily strained. This is especially advantageous when the visual channel is already busy or the visual sense is limited. We developed three design variants based on different vibrotactile illusions to communicate 3D directional cues: two variants based on the vibrotactile illusion of the cutaneous rabbit and one based on apparent vibrotactile motion. To communicate gradient information, we combined these with pulse-based and intensity-based mappings. A subsequent study showed that the pulse-based variants building on the cutaneous-rabbit illusion are suitable for communicating both directional and gradient characteristics. The results further show that representing 3D directions via vibrations can be effective and beneficial.
    Comment: CHI EA '23, April 23-28, 2023, Hamburg, Germany
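
    A cutaneous-rabbit stimulus is typically produced by a fixed schedule of short bursts with a constant stimulus-onset asynchrony (SOA) across adjacent actuators, perceived as taps "hopping" along the skin in the cued direction. The following sketch generates such a schedule; the timing values are illustrative defaults, not the parameters used in HaptiX:

```python
from dataclasses import dataclass

@dataclass
class Pulse:
    actuator: int     # index of vibration motor along the band
    onset_ms: float   # when the pulse starts
    duration_ms: float

def cutaneous_rabbit(n_actuators: int, pulses_per_site: int = 3,
                     duration_ms: float = 50.0, soa_ms: float = 80.0):
    """Pulse schedule for a cutaneous-rabbit sequence: a few short
    bursts per site, site by site, with constant SOA between onsets."""
    schedule, t = [], 0.0
    for site in range(n_actuators):
        for _ in range(pulses_per_site):
            schedule.append(Pulse(site, t, duration_ms))
            t += soa_ms
    return schedule
```

    To encode gradient strength in a pulse-based mapping, the number of pulses per site (or the SOA) could be varied while keeping the spatial progression fixed.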

    Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication

    PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras enable thermal attacks, in which heat traces resulting from authentication can be used to reconstruct passwords. In this work, we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the attack's success rate from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and designers of authentication schemes on how to resist thermal attacks.
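
    The core of such an attack can be sketched from the physics: heat traces cool over time, so a key pressed earlier is cooler than one pressed later. Assuming per-key residual temperatures have already been extracted from a thermal image (that extraction step is omitted here), the entry order can be guessed by sorting:

```python
def recover_entry_order(heat_traces: dict) -> list:
    """Given per-key residual temperatures (degrees above ambient)
    measured from a thermal image, guess the entry order: keys pressed
    later retain more heat, so sort from coolest (pressed first) to
    warmest (pressed last). Overlapping presses leave merged traces and
    defeat this heuristic, consistent with the paper's finding that
    overlapping patterns resist the attack."""
    return sorted(heat_traces, key=heat_traces.get)
```

    This also illustrates why duplicate digits weaken the defense less than overlaps: each distinct key still leaves a separable trace.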

    Perceiving layered information on 3D displays using binocular disparity

    3D displays are hitting the mass market. They are integrated in consumer TVs, notebooks, and mobile phones and are mainly used for virtual reality as well as video content. We see large potential in using depth also for structuring information. Our specific use case is 3D displays integrated in cars. The capabilities of such displays could be used to present relevant information to the driver in a fast and easy-to-understand way, e.g., by functionality-based clustering. However, excessive parallaxes can cause discomfort and in turn negatively influence the primary driving task. This requires a reasonable choice of parallax boundaries. The contribution of this paper is twofold. First, we identify the comfort zone when perceiving 3D content. Second, we determine a minimum depth distance between objects that still enables users to quickly and accurately separate the two depth planes. The results show that, in terms of task completion time, the optimum distance from the screen level is up to 35.9 arc-min of angular disparity behind the screen plane. A distance of at least 2.7 arc-min difference in angular disparity between the objects significantly decreases the time for layer separation. Based on the results, we derive design implications.
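
    Angular disparity relates an object's depth behind the screen to the difference in vergence angle between fixating the screen plane and fixating the object. The viewing distance and interpupillary distance in this sketch are illustrative defaults, not values from the paper:

```python
import math

def angular_disparity_arcmin(depth_m: float, viewing_m: float = 0.75,
                             ipd_m: float = 0.065) -> float:
    """Angular disparity (arc-min) of an object `depth_m` behind the
    screen plane, for a viewer at distance `viewing_m` with
    interpupillary distance `ipd_m`."""
    def vergence(d: float) -> float:
        # Total vergence angle (radians) when fixating at distance d.
        return 2.0 * math.atan(ipd_m / (2.0 * d))
    disparity_rad = vergence(viewing_m) - vergence(viewing_m + depth_m)
    return math.degrees(disparity_rad) * 60.0
```

    Inverting this relation for a given display and viewer would translate the reported 35.9 arc-min comfort bound and 2.7 arc-min separation threshold into metric depth offsets.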

    Literature Reviews in HCI: A Review of Reviews

    This paper analyses Human-Computer Interaction (HCI) literature reviews to provide a clear conceptual basis for authors, reviewers, and readers. HCI is multidisciplinary, and various types of literature reviews exist, from systematic reviews to critical reviews in the style of essays. Yet there is insufficient consensus on what to expect of literature reviews in HCI. Thus, a shared understanding of literature reviews and clear terminology are needed to plan, evaluate, and use literature reviews, and to further improve review methodology. We analysed 189 literature reviews published at all SIGCHI conferences and in ACM Transactions on Computer-Human Interaction (TOCHI) up until August 2022. We report on the main dimensions of variation: (i) contribution types and topics; and (ii) structure and methodologies applied. We identify gaps and trends to inform future meta work in HCI and provide a starting point for moving towards a more comprehensive terminology system for literature reviews in HCI.